10 - Pattern Recognition (PR) [ID:2504]

[MUSIC]

Can we start? So, welcome back to the Tuesday lecture. The number of students significantly increased again; that's nice to see. So welcome back to the lecture hall where we have the professional video equipment, so you have no need to show up here; you have the perfect videos at home. Before we continue in the text, let me make two comments.

One comment is on the big picture, so we will develop the big picture, and the second comment is with respect to a remark I made yesterday regarding the row vectors and column vectors of Φ. I looked at it again, and I think I was not precise enough; actually, I was wrong in saying it doesn't matter whether they are the rows or the columns. It has to be the rows of the Φ matrix, and I'll explain to you the simple argument for that.
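The argument itself is not recorded in this excerpt, but here is a minimal sketch of why the rows matter, assuming Φ is the design matrix of the regression problem, built from training samples x₁, …, x_N with feature map φ:

```latex
\Phi =
\begin{pmatrix}
  \phi(x_1)^\top \\
  \vdots \\
  \phi(x_N)^\top
\end{pmatrix}
\in \mathbb{R}^{N \times d},
\qquad
\Phi\theta =
\begin{pmatrix}
  \phi(x_1)^\top \theta \\
  \vdots \\
  \phi(x_N)^\top \theta
\end{pmatrix}
```

Stacking the φ(xᵢ)ᵀ as rows is exactly what makes the product Φθ yield one prediction per training sample.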

Okay, so let's start with the big picture. This semester we talk about pattern recognition, and we address hopefully quite modern developments in the field. Before we can talk about recent achievements, we have to come up with a solid base, and that solid base is the Bayes classifier, which gives us a good understanding of classifiers and of the limitations of classifiers. The Bayes classifier is important to know because this classifier is optimal with respect to the average loss, the average cost we associate with misclassification.
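As a reminder of the formula behind this optimality claim (standard Bayes decision theory; the loss notation l(y, y') below is assumed here, not quoted from the lecture), the rule minimizing the average loss picks

```latex
\hat{y}(x) = \operatorname*{arg\,min}_{y} \sum_{y'} l(y, y')\, p(y' \mid x),
```

and for the 0/1 loss this reduces to deciding for the class with the highest posterior, \(\hat{y}(x) = \arg\max_y p(y \mid x)\).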

So we have introduced the concept of loss functions, and we have also introduced the Bayesian decision rule. What is the Bayesian decision rule doing? It computes the posterior probabilities p(y | x) and decides for the class with the highest a posteriori probability. Just for notation here: y is the class number, and x is the feature vector.
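A minimal sketch of this decision rule in Python (the priors, likelihoods, and two-class setup below are illustrative assumptions, not from the lecture):

```python
import numpy as np
from scipy.stats import norm

def bayes_decision(x, classes, prior, likelihood):
    """Decide for the class y with the highest posterior p(y|x).

    prior:      dict mapping class y -> p(y)
    likelihood: dict mapping class y -> callable returning p(x|y)
    """
    # Unnormalized posteriors p(y|x) proportional to p(x|y) p(y); the
    # evidence p(x) is the same for every class, so it does not change
    # the argmax.
    scores = {y: likelihood[y](x) * prior[y] for y in classes}
    return max(scores, key=scores.get)

# Illustrative example: two Gaussian classes on the real line.
classes = (+1, -1)
prior = {+1: 0.5, -1: 0.5}
likelihood = {+1: norm(loc=+1.0, scale=1.0).pdf,
              -1: norm(loc=-1.0, scale=1.0).pdf}
print(bayes_decision(0.3, classes, prior, likelihood))  # -> 1
```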

Then we talked about the difference between classification and regression, and equipped with that, we started to look into logistic regression. We pointed out that the classification problem can be decomposed into multiple regression problems, in the sense that we have to estimate the decision boundary. So logistic regression gave us an excellent understanding of the relationship between the decision boundary, its mathematical representation as a level set function, and the posterior probability.
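In symbols (with F denoting the decision boundary function, as used below): the decision boundary is the zero level set of F, and the sign of F determines the class:

```latex
\mathcal{B} = \{\, x : F(x) = 0 \,\},
\qquad
\hat{y}(x) = \operatorname{sign} F(x) \in \{+1, -1\}.
```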

I just want to remind you that we have seen that the posterior p(y | x) can be written in terms of the sigmoid function, which is 1 / (1 + e^(−y F(x))), if the class numbers y are +1 and −1. So once the zero level set F(x) = 0 is given, you can write down right away the posterior probability associated with this decision boundary. And then you can say: if this is the decision boundary of the Bayes classifier, we get this posterior probability for class +1 and that probability for class −1.
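A minimal numeric check of this relationship in Python (the affine form of F below is an illustrative assumption, not from the lecture):

```python
import numpy as np

def posterior(y, x, F):
    """p(y|x) = 1 / (1 + exp(-y * F(x))) for class labels y in {+1, -1}."""
    return 1.0 / (1.0 + np.exp(-y * F(x)))

# Illustrative affine decision boundary F(x) = w^T x + w0.
w, w0 = np.array([1.0, -2.0]), 0.5
F = lambda x: w @ x + w0

x_on_boundary = np.array([0.75, 0.625])        # F(x) = 0.75 - 1.25 + 0.5 = 0
print(posterior(+1, x_on_boundary, F))         # 0.5: on the boundary, both
print(posterior(-1, x_on_boundary, F))         # classes are equally likely
print(posterior(+1, np.array([2.0, 0.0]), F))  # F(x) > 0 -> p(+1|x) > 0.5
```

Note that the two posteriors always sum to one, and F(x) = 0 gives exactly 1/2 for both classes, which is why the zero level set is the decision boundary.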

Good. You see, we are not so fast in the lecture.
